LAB: Linear Regression on Whiteside data

Published

February 23, 2025

M1 MIDS/MFA/LOGOS

Université Paris Cité

Année 2024

Course Homepage

Moodle

Introduction

The purpose of this lab is to introduce linear regression using base R and the tidyverse. We work on a dataset provided by the MASS package. This dataset is investigated in the book by Venables and Ripley, and their discussion is worth reading. Our aim is to relate regression as a tool for data exploration with regression as a method in statistical inference. To perform regression, we will rely on the base R function lm() and on the eponymous S3 class lm. We will spend time understanding how the formula argument can be used to construct a design matrix from a dataframe representing a dataset.

Packages installation and loading (again)

Code
# We will use the following packages. 
# If needed, install them : pak::pkg_install(). 
stopifnot(
  require("magrittr"),
  require("lobstr"),
  require("ggforce"),
  require("patchwork"), 
  require("gt"),
  require("glue"),
  require("skimr"),
  require("corrr"),
  require("GGally"),
  require("broom"),
  require("tidyverse")
)

Besides the tidyverse, we rely on skimr to perform univariate analysis and on GGally::ggpairs to perform pairwise (bivariate) analysis. Package corrr provides graphical tools to explore correlation matrices. At some point, we will showcase the exposing pipe %$% and the classical pipe %>% of magrittr. We use gt to display handy tables, patchwork to compose graphical objects, and glue to build formatted strings. Package broom proves very useful when milking linear models produced by lm() (and many other objects produced by estimators, tests, …).

Dataset

The dataset is available from package MASS. MASS can be downloaded from cran.

Code
whiteside <- MASS::whiteside # no need to load the whole package

cur_dataset <- str_to_title(as.character(substitute(whiteside)))
# ?whiteside

The documentation of R tells us a little bit more about this data set.

Mr Derek Whiteside of the UK Building Research Station recorded the weekly gas consumption and average external temperature at his own house in south-east England for two heating seasons, one of 26 weeks before, and one of 30 weeks after cavity-wall insulation was installed. The object of the exercise was to assess the effect of the insulation on gas consumption.

This means that our sample is made of 56 observations. Each observation corresponds to a week during a heating season. For each observation, we have the average external temperature Temp (in degrees Celsius) and the weekly gas consumption Gas. We also have Insul, which tells us whether the observation was recorded Before or After treatment.

Temperature is the explanatory variable or the covariate. The target/response is the weekly gas consumption. We aim to predict or to explain the variations of weekly gas consumption as a function of average weekly temperature.

The question is whether the treatment (insulation) modifies the relation between gas consumption and external temperature, and, if we conclude that it does, in which way.

Have a glimpse at the data.

Code
whiteside %>% 
  glimpse
Rows: 56
Columns: 3
$ Insul <fct> Before, Before, Before, Before, Before, Before, Before, Before, …
$ Temp  <dbl> -0.8, -0.7, 0.4, 2.5, 2.9, 3.2, 3.6, 3.9, 4.2, 4.3, 5.4, 6.0, 6.…
$ Gas   <dbl> 7.2, 6.9, 6.4, 6.0, 5.8, 5.8, 5.6, 4.7, 5.8, 5.2, 4.9, 4.9, 4.3,…

Even though the experimenter, Mr Whiteside, decided to apply a treatment to his house, this is not exactly what we call experimental data: the experimenter has no way to clamp the external temperature. With respect to the Temp variable (the explanatory variable) we are facing observational data.

Columnwise exploration

Question

Before proceeding to linear regressions of Gas with respect to Temp, perform univariate analysis on each variable.

  • Compute summary statistics
  • Build the corresponding plots
Solution

skimr does the job. There are no missing data (the complete rate is always 1), so we remove non-informative columns from the output.

Code
whiteside |>
  skimr::skim() |>
  select(-n_missing, -complete_rate, -factor.ordered, - factor.n_unique) |>
  gt() |>
  gt::fmt_number(decimals=1) |>
  gt::tab_caption("Whiteside dataset. Columnwise summaries")
Whiteside dataset. Columnwise summaries
skim_type skim_variable factor.top_counts numeric.mean numeric.sd numeric.p0 numeric.p25 numeric.p50 numeric.p75 numeric.p100 numeric.hist
factor Insul Aft: 30, Bef: 26 NA NA NA NA NA NA NA NA
numeric Temp NA 4.9 2.7 −0.8 3.1 4.9 7.1 10.2 ▃▅▇▇▃
numeric Gas NA 4.1 1.2 1.3 3.5 4.0 4.6 7.2 ▁▆▇▂▁

An alternative way of doing this univariate analysis consists in separating categorical variables from numerical ones.

Code
sk <- whiteside %>% 
  skimr::skim() %>% 
  select(-n_missing, - complete_rate, -factor.ordered, - factor.n_unique)

skimr::yank(sk, "factor") |> 
  gt() |>
  gt::tab_caption("Whiteside dataset. Categorical variables summaries")


skimr::yank(sk, "numeric") |> 
  gt() |>
  gt::fmt_number(decimals=1) |>
  gt::tab_caption("Whiteside dataset. Numeric variables summaries")

We may test the normality of the Gas and Temp distributions.

Var statistic p.value method
Temp 0.98 0.48 Shapiro-Wilk normality test
Gas 0.96 0.08 Shapiro-Wilk normality test

Both variables pass the Shapiro-Wilk test with flying colors.
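The chunk producing the table above is hidden. One possible way to build it with base R's shapiro.test() (the shapiro_row helper and the tabular reshaping are illustrative assumptions; the original likely used broom::tidy):

```r
# Apply shapiro.test() to each numeric column and stack the results
shapiro_row <- function(x) {
  st <- shapiro.test(x)
  data.frame(statistic = unname(st$statistic),
             p.value   = st$p.value,
             method    = st$method)
}
do.call(rbind, lapply(MASS::whiteside[c("Temp", "Gas")], shapiro_row))
```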

We may also use the Jarque-Bera test (see R-bloggers on this).

Whiteside dataset
Var statistic p.value parameter method
Temp 1.37 0.50 2.00 Jarque Bera Test
Gas 2.74 0.25 2.00 Jarque Bera Test

Both variables also pass the Jarque-Bera test with flying colors. This is noteworthy since the Jarque-Bera test compares empirical skewness and kurtosis to Gaussian skewness and kurtosis.
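The Jarque-Bera statistic is easy to hand-roll from these empirical moments (a base-R sketch; the table above was presumably produced with tseries::jarque.bera.test, which computes the same quantity):

```r
# Jarque-Bera: n/6 * (skewness^2 + (kurtosis - 3)^2 / 4),
# compared to a chi-square distribution with 2 degrees of freedom
jb <- function(x) {
  n <- length(x)
  z <- x - mean(x)
  s <- mean(z^3) / mean(z^2)^(3 / 2)  # empirical skewness
  k <- mean(z^4) / mean(z^2)^2        # empirical kurtosis
  stat <- n / 6 * (s^2 + (k - 3)^2 / 4)
  c(statistic = stat, p.value = pchisq(stat, df = 2, lower.tail = FALSE))
}
sapply(MASS::whiteside[c("Temp", "Gas")], jb)
```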

Pairwise exploration

Question

Compare distributions of numeric variables with respect to categorical variable Insul

Solution

We start by plotting histograms

To abide by the DRY principle, we take advantage of the fact that aesthetics and labels can be tuned incrementally.

Code
p_xx <- whiteside |>
    ggplot() +
      geom_histogram(
        mapping=aes(fill=Insul, color=Insul, y=after_stat(density)),
        position="dodge",
        alpha=.1,
        bins=10) 

p_1 <- p_xx + 
  aes(x=Temp)  +
  theme(legend.position = "None")+
  labs(title="External Temperature",
       subtitle = "Weekly Average (Celsius)")

p_2 <- p_xx + 
  aes(x=Gas)  +
  labs(title="Gas Consumption" ,
       subtitle="Weekly Average")

# patchwork the two graphical objects 
(p_1 + p_2) +
  patchwork::plot_annotation(
    caption="Whiteside dataset from package MASS"
  )

Note the position parameter of geom_histogram(). Check the result for position="stack" and position="identity", and make up your mind about the most convenient choice.

The Gas consumption distribution After looks shifted with respect to the distribution Before.

Another visualization strategy consists in using the faceting mechanism.

Code
r <- whiteside %>% 
  pivot_longer(cols=c(Gas, Temp),
              names_to = "Vars",
              values_to = "Vals") %>% 
  ggplot() +
  aes(x=Vals)  +
  facet_wrap(~ Insul + Vars ) + 
  xlab("")

r +
  aes(y=after_stat(density)) +
  geom_histogram(alpha=.3, fill="white", color="black", bins=6) +
  ggtitle(glue("{cur_dataset} data"))

Covariance and correlation between Gas and Temp

Question

Compute the covariance matrix of Gas and Temp

Solution
Code
mu_n <- whiteside %>% 
  select(where(is.numeric)) %>% 
  colMeans()

\[ C_n = \begin{bmatrix} 7.56 & -2.19\\ -2.19 & 1.36 \end{bmatrix} \qquad \mu_n = \begin{bmatrix} 4.88\\ 4.07 \end{bmatrix} \]

Solution
Code
# Compute the covariance matrix for Gas and Temp
C <- whiteside |>
  select(where(is.numeric)) |> 
  cov()

Computing correlations, overall and per group, is revealing.

Code
# use magrittr pipe %>%  to define a pipeline function
my_cor <-  . %>% 
  summarize(
    pearson= cor(Temp, Gas, method="pearson"),
    kendall=cor(Temp, Gas, method="kendall"),
    spearman=cor(Temp,Gas, method="spearman")
  ) 

t1 <- whiteside |>
  group_by(Insul) |> 
  my_cor()


t2 <- whiteside |>
  my_cor() |>
  mutate(Insul="pooled")

bind_rows(t1, t2)  |> 
    gt() |>
    gt::fmt_number(decimals=2) |>
    gt::tab_caption("Whiteside data: correlations between Gas and Temp")
Whiteside data: correlations between Gas and Temp
Insul pearson kendall spearman
Before −0.97 −0.85 −0.96
After −0.90 −0.72 −0.86
pooled −0.68 −0.47 −0.62

Note the sharp increase in magnitude of all correlation coefficients when data are grouped according to the control/treatment variable (Insul).

Use ggpairs from GGally to get a quick overview of the pairwise interactions.

Code
whiteside |>
  GGally::ggpairs()
`stat_bin()` using `bins = 30`. Pick better value with `binwidth`.

Question

Build a scatterplot of the Whiteside dataset

Solution
Code
p <- whiteside %>% 
  ggplot() +
  aes(x=Temp, y=Gas) +
  geom_point(aes(shape=Insul)) +
  xlab("Average Weekly Temperature (Celsius)") +
  ylab("Average Weekly Gas Consumption 1000 cube feet") +
  labs(
    ## Use list unpacking
  )

p + 
  ggtitle(glue("{cur_dataset} dataset"))

Note that the dataset looks like the stacking of two bananas corresponding to the two heating seasons.

Question

Build boxplots of Temp and Gas versus Insul

Solution
Code
q <- whiteside %>% 
  ggplot() +
  aes(x=Insul)

qt <- q + 
  geom_boxplot(aes(y=Temp))

qg <- q + 
  geom_boxplot(aes(y=Gas))

(qt + qg) +
  patchwork::plot_annotation(title = glue("{cur_dataset} dataset"))

Note the two low extremes/outliers for the Gas Consumption after Insulation.

Question

Build violin plots of Temp and Gas versus Insul

Solution
Code
(q + 
  geom_violin(aes(y=Temp))) +
(q + 
  geom_violin(aes(y=Gas))) +
  patchwork::plot_annotation(title = glue("{cur_dataset} dataset"))

Question

Plot density estimates of Temp and Gas versus Insul.

Solution
Code
r +
  stat_density(alpha=.3 , 
               fill="grey", 
               color="black", 
#               bw = "SJ",
               adjust = .5 ) +
  ggtitle(glue("{cur_dataset} data"))

Hand-made calculation of simple linear regression estimates for Gas versus Temp

Question

Compute slope and intercept using elementary computations

Solution
Code
slope <- C[1,2] / C[1,1] # slope 

intercept <- whiteside %$%  # exposing pipe from magrittr
  (mean(Gas) - slope * mean(Temp)) # intercept

# with(whiteside,
#     mean(Gas) - b * mean(Temp)) 

In simple linear regression, the slope follows from the covariance matrix in a straightforward way. The slope can also be expressed as the linear (Pearson) correlation coefficient times the ratio between the standard deviation of the response variable and the standard deviation of the explanatory variable.
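Both identities are easy to check numerically; a self-contained sketch on synthetic data (base R only):

```r
# slope = cov(x, y)/var(x) = cor(x, y) * sd(y)/sd(x) = OLS slope
set.seed(1)
x <- rnorm(30)
y <- 2 - 0.5 * x + rnorm(30)
slope_cov <- cov(x, y) / var(x)
slope_cor <- cor(x, y) * sd(y) / sd(x)
stopifnot(all.equal(slope_cov, slope_cor),
          all.equal(slope_cov, unname(coef(lm(y ~ x))["x"])))
```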

Question

Overlay the scatterplot with the regression line.

Solution
Code
p + 
  geom_abline(slope=slope, intercept = intercept) +
  ggtitle(glue("{cur_dataset} data"), subtitle = "Least square regression line")

Using lm()

lm stands for Linear Models. Function lm has a number of arguments, including:

  • formula
  • data
Question

Use lm() to compute slope and intercept

Solution
Code
lm0 <- lm(Gas ~ Temp, data = whiteside)

The result is an object of class lm.

The generic function summary() has a method for class lm

Code
lm0 %>% 
  summary()

Call:
lm(formula = Gas ~ Temp, data = whiteside)

Residuals:
    Min      1Q  Median      3Q     Max 
-1.6324 -0.7119 -0.2047  0.8187  1.5327 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)   5.4862     0.2357  23.275  < 2e-16 ***
Temp         -0.2902     0.0422  -6.876 6.55e-09 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 0.8606 on 54 degrees of freedom
Multiple R-squared:  0.4668,    Adjusted R-squared:  0.457 
F-statistic: 47.28 on 1 and 54 DF,  p-value: 6.545e-09

The summary is made of several parts

  • The call. Very useful if we handle many different models (corresponding to different formulae, or different datasets)
  • A numerical summary of residuals
  • A commented display of the estimated coefficients
  • Estimate of noise scale (under Gaussian assumptions)
  • Squared linear correlation coefficient between response variable \(Y\) (Gas) and predictions \(\widehat{Y}\)
  • A test statistic (Fisher’s statistic) for assessing null hypothesis that slope is null, and corresponding \(p\)-value (under Gaussian assumptions).

Including a raw summary in a report is not always a good idea. It is easy to extract tabular versions of the summary using functions tidy() and glance() from package broom.

For HTML output, gt::gt() allows us to polish the final result.

Solution

We can use the exposing pipe from magrittr (or the with construct from base R) to build a function that extracts the coefficients estimates, standard error, \(t\)-statistic and associated p-values.

Code
tidy_lm <- . %$% (    # <1>  The lhs is meant to be of class lm
  tidy(.)  %>%         # <2> . acts as a pronoun for magrittr pipes     
  gt::gt() %>% 
  gt::fmt_number(decimals=2) %>% 
  gt::tab_caption(glue("Linear regression. Dataset: {call$data},  Formula: {deparse(call$formula)}"))  # <3> call is evaluated as a member of the pronoun `.`
)

tidy_lm(lm0)
Linear regression. Dataset: whiteside, Formula: Gas ~ Temp
term estimate std.error statistic p.value
(Intercept) 5.49 0.24 23.28 0.00
Temp −0.29 0.04 −6.88 0.00

deparse() is an important function from base R. It is very helpful when trying to take advantage of lazy evaluation mechanisms.
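A minimal illustration of the substitute()/deparse() pair (expr_to_string is a throwaway helper for illustration; the same mechanism recovered cur_dataset above):

```r
# substitute() captures the unevaluated argument; deparse() turns the
# captured expression back into a string
expr_to_string <- function(x) deparse(substitute(x))
expr_to_string(Gas ~ Temp)  # "Gas ~ Temp"
expr_to_string(whiteside)   # "whiteside" -- the symbol, never evaluated
```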

TODO: more on glue()
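In the meantime, a quick illustration of what glue() does (expressions inside braces are evaluated and spliced into the string):

```r
# glue() evaluates the R expressions inside {...} and interpolates the results
name <- "Whiteside"
glue::glue("{name} dataset, n = {26 + 30} observations")
# Whiteside dataset, n = 56 observations
```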

Question

Function glance() extracts information that can be helpful when performing model/variable selection.

Solution

The next chunk handles several other parts of the summary.

Code
glance_lm <-  . %$% (
  glance(.) %>% 
  mutate(across(-c(p.value), 
                ~ round(.x, digits=2)),
         p.value=signif(p.value,3)) %>% 
  gt::gt() %>% 
  gt::tab_caption(glue("Dataset: {call$data},  Formula: {deparse(call$formula)}"))
)

glance_lm(lm0)
Dataset: whiteside, Formula: Gas ~ Temp
r.squared adj.r.squared sigma statistic p.value df logLik AIC BIC deviance df.residual nobs
0.47 0.46 0.86 47.28 6.55e-09 1 -70.04 146.07 152.15 39.99 54 56
  • r.squared (and adj.r.squared)
  • sigma estimates the noise standard deviation (under the homoscedastic Gaussian noise assumption)
  • statistic is the Fisher statistic used to assess the hypothesis that the slope (Temp coefficient) is zero. It is compared with quantiles of Fisher distribution with 1 and 54 degrees of freedom (check pf(47.28, df1=1, df2=54, lower.tail=F) or qf(6.55e-09, df1=1, df2=54, lower.tail=F)).
Question

R offers a function confint() that can be fed with objects of class lm. Explain the output of this function.

Solution

Under the homoscedastic Gaussian noise assumption, confint() produces confidence intervals for the estimated coefficients. Using the union bound, we can derive a conservative confidence rectangle.

Code
M <- confint(lm0, level=.95)  

as_tibble(M) |>
  mutate(Coeff=rownames(M)) |>
  relocate(Coeff) |>
  gt::gt() |>
  gt::fmt_number(decimals=2)
Coeff 2.5 % 97.5 %
(Intercept) 5.01 5.96
Temp −0.37 −0.21
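The conservative rectangle mentioned above follows from the union bound: take each side at level \(1-\alpha/2\), so that the rectangle covers both coefficients with probability at least \(1-\alpha\). A sketch (refitting lm0 so the chunk stands alone):

```r
# Bonferroni rectangle: each coordinate interval at level 1 - 0.05/2 = 0.975,
# so the product covers (intercept, slope) with probability >= 0.95
lm0 <- lm(Gas ~ Temp, data = MASS::whiteside)
confint(lm0, level = 1 - 0.05 / 2)
```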
Question

Plot a \(95\%\) confidence region for the parameters (assuming homoscedastic Gaussian noise).

Solution
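One possible answer, sketched below: eigen-decompose the estimated covariance of the coefficients and draw the resulting ellipse. Using ggforce::geom_ellipse (loaded at the top) for the drawing is an assumption; the chunk refits lm0 so it stands alone.

```r
# Sketch: 95% confidence ellipse for (intercept, slope) under the
# homoscedastic Gaussian noise assumption:
# {beta : (beta_hat - beta)' V^{-1} (beta_hat - beta) <= 2 F_{0.95; 2, n-2}}
library(ggplot2)
lm0 <- lm(Gas ~ Temp, data = MASS::whiteside)
V <- vcov(lm0)                    # estimated covariance of the coefficients
beta_hat <- coef(lm0)
ev <- eigen(V, symmetric = TRUE)  # principal axes of the ellipse
c2 <- 2 * qf(.95, df1 = 2, df2 = lm0$df.residual)
ang <- atan2(ev$vectors[2, 1], ev$vectors[1, 1])

ggplot() +
  ggforce::geom_ellipse(
    aes(x0 = beta_hat[1], y0 = beta_hat[2],
        a = sqrt(c2 * ev$values[1]), b = sqrt(c2 * ev$values[2]),
        angle = ang)) +
  geom_point(aes(x = beta_hat[1], y = beta_hat[2]), shape = 3) +
  xlab("Intercept") + ylab("Slope (Temp)")
```

The ellipse is tilted because the intercept and slope estimates are negatively correlated.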

Diagnostic plots

Method plot.lm() of generic S3 function plot from base R offers six diagnostic plots. By default it displays four of them.

Question

What are the diagnostic plots good for?

Solution
Code
plot(lm0)  

The motivation and usage of diagnostic plots is explained in detail in the book by Fox and Weisberg: An R companion to applied regression.

The diagnostic plots can be built from the information gathered in the lm object returned by lm(...).

It is convenient to extract the required pieces of information using method augment.lm of the generic function augment() from package broom.

Solution
Code
whiteside_aug <- lm0 %>% 
  augment(whiteside)

lm0 %$% ( # exposing pipe !!! 
  augment(., data=whiteside) %>% 
  mutate(across(!where(is.factor), ~ signif(.x, 3))) %>% 
  group_by(Insul) %>% 
  sample_n(5) %>% 
  ungroup() %>% 
  gt::gt() %>% 
  gt::tab_caption(glue("Dataset {call$data},  {deparse(call$formula)}"))
)
Dataset whiteside, Gas ~ Temp
Insul Temp Gas .fitted .resid .hat .sigma .cooksd .std.resid
Before 0.4 6.4 5.37 1.030 0.0660 0.856 0.05420 1.240
Before -0.8 7.2 5.72 1.480 0.0953 0.842 0.17300 1.810
Before 3.6 5.6 4.44 1.160 0.0218 0.854 0.02060 1.360
Before 8.0 4.0 3.16 0.835 0.0413 0.861 0.02120 0.992
Before 3.2 5.8 4.56 1.240 0.0246 0.851 0.02700 1.460
After 7.5 2.6 3.31 -0.710 0.0344 0.863 0.01260 -0.839
After 8.8 1.3 2.93 -1.630 0.0549 0.838 0.11100 -1.950
After 7.2 2.8 3.40 -0.597 0.0309 0.865 0.00790 -0.704
After 6.2 2.8 3.69 -0.887 0.0221 0.860 0.01230 -1.040
After 4.0 3.7 4.33 -0.625 0.0197 0.864 0.00541 -0.734

Recall that in the output of augment()

  • .fitted: \(\widehat{Y} = H \times Y= X \times \widehat{\beta}\)
  • .resid: \(\widehat{\epsilon} = Y - \widehat{Y}\) residuals, \(\sim (\text{Id}_n - H) \times \epsilon\)
  • .hat: diagonal coefficients of Hat matrix \(H\)
  • .sigma: leave-one-out estimates of the noise standard deviation \(\sigma\), i.e. the estimate obtained when row \(i\) is left out (see below)

Compute the share of explained variance

Solution
Code
whiteside_aug %$% {
  1 - (var(.resid)/(var(Gas)))
}
[1] 0.4668366
Code
# with(whiteside_aug,
#   1 - (var(.resid)/(var(Gas)))

Plot residuals against fitted values

Solution
Code
diag_1 <- whiteside_aug %>% 
  ggplot() +
  aes(x=.fitted, y=.resid)+
  geom_point(aes(shape= Insul), size=1, color="black") +
  geom_smooth(formula = y ~ x,
              method="loess",
              se=F,
              linetype="dotted",
              linewidth=.5,
              color="black") +
  geom_hline(yintercept = 0, linetype="dashed") +
  xlab("Fitted values") +
  ylab("Residuals") +
  labs(caption = "Residuals versus Fitted")

Plot the square root of the absolute standardized residuals against the fitted values (scale-location plot).

Solution
Code
diag_3 <- whiteside_aug %>%
  ggplot() +
  aes(x=.fitted, y=sqrt(abs(.std.resid))) +
  geom_smooth(formula = y ~ x,
              se=F,
              method="loess",
              linetype="dotted",
              linewidth=.5,
              color="black") +
  xlab("Fitted values") +
  ylab("sqrt(|standardized residuals|)") +
  geom_point(aes(shape=Insul), size=1, alpha=1) +
  labs(caption = "Scale location")
Solution
Code
diag_2 <- whiteside_aug %>% 
  ggplot() +
  aes(sample=.std.resid) +
  geom_qq(size=.5, alpha=.5) +
  stat_qq_line(linetype="dotted",
              linewidth=.5,
              color="black") +
  coord_fixed() +
  labs(caption="Residuals qqplot") +
  xlab("Theoretical quantiles") +
  ylab("Empirical quantiles of standardized residuals")

Task: gather the three diagnostic plots in a single figure.

Solution
Code
(diag_1 + diag_2 + diag_3 + guide_area()) + 
  plot_layout(guides="collect") +
  plot_annotation(title=glue("{cur_dataset} dataset"),
                  subtitle = glue("Regression diagnostic  {deparse(lm0$call$formula)}"), caption = 'The fact that the sign of residuals depends on Insul shows that our modeling is too naive.\n The qqplot suggests that the residuals are not collected from Gaussian homoscedastic noise.'
                  )

Taking into account Insulation

Design a formula that allows us to take the possible impact of insulation into account. Insulation may impact the relation between weekly gas consumption and average external temperature in two ways: it may modify the intercept, and it may also modify the slope, that is, the sensitivity of gas consumption to average external temperature.

Have a look at formula documentation (?formula).
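As a reminder of formula algebra, a * in a formula expands to main effects plus their interaction; terms() makes the expansion explicit:

```r
# Gas ~ Temp * Insul is shorthand for Gas ~ Temp + Insul + Temp:Insul
attr(terms(Gas ~ Temp * Insul), "term.labels")
# "Temp" "Insul" "Temp:Insul"
```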

Solution
Code
lm1 <- lm(Gas ~ Temp * Insul, data = whiteside)

Check the design using function model.matrix(). How can you relate this augmented design to the one-hot encoding of variable Insul?

Solution
Code
model.matrix(lm1) |>
  head()
  (Intercept) Temp InsulAfter Temp:InsulAfter
1           1 -0.8          0               0
2           1 -0.7          0               0
3           1  0.4          0               0
4           1  2.5          0               0
5           1  2.9          0               0
6           1  3.2          0               0
Solution
Code
lm1 %>% 
  tidy_lm()
Linear regression. Dataset: whiteside, Formula: Gas ~ Temp * Insul
term estimate std.error statistic p.value
(Intercept) 6.85 0.14 50.41 0.00
Temp −0.39 0.02 −17.49 0.00
InsulAfter −2.13 0.18 −11.83 0.00
Temp:InsulAfter 0.12 0.03 3.59 0.00
Solution
Code
lm0 |>
  glance_lm()
Dataset: whiteside, Formula: Gas ~ Temp
r.squared adj.r.squared sigma statistic p.value df logLik AIC BIC deviance df.residual nobs
0.47 0.46 0.86 47.28 6.55e-09 1 -70.04 146.07 152.15 39.99 54 56
Code
lm1 %>% 
  glance_lm()
Dataset: whiteside, Formula: Gas ~ Temp * Insul
r.squared adj.r.squared sigma statistic p.value df logLik AIC BIC deviance df.residual nobs
0.93 0.92 0.32 222.33 1.23e-29 3 -14.1 38.2 48.33 5.43 52 56
Solution
Code
p +
  geom_smooth(formula='y ~ poly(x, 2)',linewidth=.5, color="black",linetype="dashed",  method="lm", se=FALSE)+
  aes(color=Insul) +
  geom_smooth(aes(linetype=Insul), 
              formula='y ~ x',linewidth=.5, color="black", method="lm", se=FALSE) +
  scale_color_manual(values= c("Before"="red", "After"="blue")) +
  geom_abline(intercept = 6.8538, slope=-.3932, color="red") +
  geom_abline(intercept = 6.8538 - 2.13, slope=-.3932 +.1153, color="blue") + labs(
    title=glue("{cur_dataset} dataset"),
    subtitle = glue("Regression: {deparse(lm1$call$formula)}")
    )

Solution
Code
whiteside_aug1 <-  augment(lm1, whiteside)

(diag_1 %+% whiteside_aug1) +
(diag_2 %+% whiteside_aug1) +
(diag_3 %+% whiteside_aug1) +  
 guide_area() +
  plot_layout(guides = "collect") +
  plot_annotation(title=glue("{cur_dataset} dataset"),
                  subtitle = glue("Regression diagnostic  {deparse(lm1$call$formula)}"), caption = 'One possible outlier.\n Visible on all three plots.'
                  )

The formula argument defines the design matrix and the Least-Squares problem used to estimate the coefficients.

Function model.matrix() allows us to inspect the design matrix.

Solution
Code
model.matrix(lm1) %>% 
  as_tibble() %>% 
  mutate(Insul=ifelse(InsulAfter,"After", "Before")) %>% 
  ungroup() %>% 
  DT::datatable(caption=glue("Design matrix for {deparse(lm1$call$formula)}"))
Code
X <- model.matrix(lm1)

In order to solve the Least-Squares problem, we have to compute \[\widehat{\beta} = (X^T \times X)^{-1} \times X^T \times Y\] This can be done in several ways.

lm() uses QR factorization.

Solution
Code
Q <- qr.Q(lm1$qr)
R <- qr.R(lm1$qr)  # R is upper triangular 

norm(X - Q %*% R, type="F") # QR Factorization
[1] 1.753321e-14
Code
signif(t(Q) %*% Q, 2)      # Q's columns form an orthonormal family
         [,1]     [,2]     [,3]    [,4]
[1,]  1.0e+00 -1.4e-17  3.1e-17 1.7e-16
[2,] -1.4e-17  1.0e+00 -3.5e-17 1.4e-17
[3,]  3.1e-17 -3.5e-17  1.0e+00 0.0e+00
[4,]  1.7e-16  1.4e-17  0.0e+00 1.0e+00
Code
H <- Q %*% t(Q)             # The Hat matrix 

norm(X - H %*% X, type="F") # H leaves X's columns invariant
[1] 1.758479e-14
Code
norm(H - H %*% H, type="F") # H is idempotent
[1] 7.993681e-16
Code
# eigen(H, symmetric = TRUE, only.values = TRUE)$values
Code
sum((solve(t(X) %*% X) %*% t(X) %*% whiteside$Gas - lm1$coefficients)^2)
[1] 3.075652e-29

Once we have the QR factorization of \(X\), solving the normal equations boils down to inverting a triangular matrix.

Code
sum((solve(R) %*% t(Q) %*% whiteside$Gas - lm1$coefficients)^2)
[1] 2.050287e-29
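Since R is upper triangular, base R's backsolve() finishes the job by back-substitution, without forming any inverse. A self-contained sketch on synthetic data:

```r
# Solve R %*% beta = Q'y by back-substitution (no explicit inversion)
set.seed(1)
X <- cbind(1, rnorm(10))   # toy design: intercept plus one covariate
y <- rnorm(10)
qrX <- qr(X)
beta_qr <- backsolve(qr.R(qrX), crossprod(qr.Q(qrX), y))
stopifnot(all.equal(c(beta_qr), unname(coef(lm(y ~ X[, 2])))))
```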
Code
#matador::mat2latex(signif(solve(t(X) %*% X), 2))

\[ (X^T \times X)^{-1} = \begin{bmatrix} 0.18 & -0.026 & -0.18 & 0.026\\ -0.026 & 0.0048 & 0.026 & -0.0048\\ -0.18 & 0.026 & 0.31 & -0.048\\ 0.026 & -0.0048 & -0.048 & 0.0099 \end{bmatrix} \]

Solution
Code
whiteside_aug1 %>% 
  glimpse()
Rows: 56
Columns: 9
$ Insul      <fct> Before, Before, Before, Before, Before, Before, Before, Bef…
$ Temp       <dbl> -0.8, -0.7, 0.4, 2.5, 2.9, 3.2, 3.6, 3.9, 4.2, 4.3, 5.4, 6.…
$ Gas        <dbl> 7.2, 6.9, 6.4, 6.0, 5.8, 5.8, 5.6, 4.7, 5.8, 5.2, 4.9, 4.9,…
$ .fitted    <dbl> 7.168419, 7.129095, 6.696532, 5.870731, 5.713435, 5.595463,…
$ .resid     <dbl> 0.031581243, -0.229094875, -0.296532170, 0.129269357, 0.086…
$ .hat       <dbl> 0.22177670, 0.21586370, 0.15721835, 0.07782904, 0.06755399,…
$ .sigma     <dbl> 0.3261170, 0.3241373, 0.3230041, 0.3256103, 0.3259138, 0.32…
$ .cooksd    <dbl> 0.0008751645, 0.0441520664, 0.0466380672, 0.0036646607, 0.0…
$ .std.resid <dbl> 0.11083298, -0.80096122, -1.00001423, 0.41675591, 0.2775375…

Understanding .fitted column

Solution
Code
sum((predict(lm1, newdata = whiteside) - whiteside_aug1$.fitted)^2)
[1] 0
Code
sum((H %*% whiteside_aug1$Gas - whiteside_aug1$.fitted)^2)
[1] 3.478877e-28

Understanding .resid

Solution
Code
sum((whiteside_aug1$.resid + H %*% whiteside_aug1$Gas - whiteside_aug1$Gas)^2)
[1] 3.461127e-28

Understanding .hat

Solution
Code
sum((whiteside_aug1$.hat - diag(H))^2)
[1] 0

Understanding .std.resid

Solution
Code
sigma_hat <- sqrt(sum(lm1$residuals^2)/lm1$df.residual)

lm1 %>% glance() 
# A tibble: 1 × 12
  r.squared adj.r.squared sigma statistic  p.value    df logLik   AIC   BIC
      <dbl>         <dbl> <dbl>     <dbl>    <dbl> <dbl>  <dbl> <dbl> <dbl>
1     0.928         0.924 0.323      222. 1.23e-29     3  -14.1  38.2  48.3
# ℹ 3 more variables: deviance <dbl>, df.residual <int>, nobs <int>

\[ \widehat{r}_i = \frac{\widehat{\epsilon}_i}{\widehat{\sigma} \sqrt{1 - H_{i,i}}} \]

Code
sum((sigma_hat * sqrt(1 -whiteside_aug1$.hat) * whiteside_aug1$.std.resid - whiteside_aug1$.resid)^2)
[1] 4.471837e-28

Understanding column .sigma

Solution

Column .sigma contains the leave-one-out estimates of \(\sigma\), that is, whiteside_aug1$.sigma[i] is the estimate of \(\sigma\) you obtain by leaving out the i-th row of the dataframe.

There is no need to recompute everything for each sample element.

\[ \widehat{\sigma}^2_{(i)} = \widehat{\sigma}^2 \, \frac{n-p-1- \frac{\widehat{\epsilon}_i^2}{\widehat{\sigma}^2 (1 - H_{i,i})}}{n-p-2} \]
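The formula can be checked against base R's lm.influence(), whose sigma component contains exactly these leave-one-out estimates. A self-contained sketch on synthetic data (here \(p\) counts the covariates excluding the intercept, matching the formula above):

```r
# Verify sigma_(i)^2 = sigma^2 * (n - p - 1 - e_i^2/(sigma^2 (1 - h_i))) / (n - p - 2)
set.seed(42)
d <- data.frame(x = rnorm(20))
d$y <- 1 + 2 * d$x + rnorm(20)
fit <- lm(y ~ x, data = d)
n <- nobs(fit)
p <- 1                          # one covariate besides the intercept
sig2 <- sigma(fit)^2
h    <- lm.influence(fit)$hat   # leverages (diagonal of the hat matrix)
eps  <- residuals(fit)
loo2 <- sig2 * (n - p - 1 - eps^2 / (sig2 * (1 - h))) / (n - p - 2)
stopifnot(all.equal(sqrt(loo2), lm.influence(fit)$sigma))
```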